Optimality and Stability in Federated Learning: A Game-theoretic Approach
Federated learning is a distributed learning paradigm where multiple agents, each with access only to local data, jointly learn a global model. There has recently been an explosion of research aiming not only to improve the accuracy rates of federated learning, but also to provide certain guarantees around social good properties such as total error. One branch of this research has taken a game-theoretic approach, and in particular, prior work has viewed federated learning as a hedonic game, where error-minimizing players arrange themselves into federating coalitions. This past work proves the existence of stable coalition partitions, but leaves open a wide range of questions, including how far from optimal these stable solutions are. In this work, we motivate and define a notion of optimality given by the average error rates among federating agents (players).
Exact and approximate error bounds for physics-informed neural networks
Chantada, Augusto T., Protopapas, Pavlos, Bachar, Luca Gomez, Landau, Susana J., Scóccola, Claudia G.
The use of neural networks to solve differential equations, as an alternative to traditional numerical solvers, has increased recently. However, error bounds for the obtained solutions have only been developed for certain equations. In this work, we report important progress in calculating error bounds for solutions that physics-informed neural networks (PINNs) provide for nonlinear first-order ODEs. We give a general expression that describes the error of the solution that the PINN-based method provides for a nonlinear first-order ODE. In addition, we propose a technique to calculate an approximate bound for the general case and an exact bound for a particular case. The error bounds are computed using only the residual information and the equation structure. We apply the proposed methods to particular cases and show that they can successfully provide error bounds without relying on the numerical solution.
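To illustrate the kind of residual-only guarantee the abstract describes, here is a minimal sketch of a standard Grönwall-type a-posteriori bound for a first-order ODE (this is a textbook bound chosen for illustration, not necessarily the paper's exact construction; the test problem u' = -u and the flawed approximate solution are assumptions):

```python
import math

# Test problem (assumed): u'(t) = -u(t), u(0) = 1, exact solution u(t) = exp(-t).
# A "PINN-like" approximate solution with a deliberately injected defect:
u_exact = lambda t: math.exp(-t)
u_hat   = lambda t: math.exp(-t) * (1.0 + 0.01 * t)

def residual(t, h=1e-5):
    """r(t) = u_hat'(t) - f(t, u_hat(t)); here f(t, u) = -u."""
    du = (u_hat(t + h) - u_hat(t - h)) / (2 * h)   # central difference
    return du - (-u_hat(t))

def error_bound(t, L=1.0, n=2000):
    """Gronwall bound |e(t)| <= int_0^t |r(s)| * exp(L*(t - s)) ds,
    with L a Lipschitz constant of f in u (trapezoid-rule quadrature)."""
    ts = [t * i / n for i in range(n + 1)]
    vals = [abs(residual(s)) * math.exp(L * (t - s)) for s in ts]
    return (t / n) * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])

t = 1.0
true_err = abs(u_hat(t) - u_exact(t))   # about 0.0037
bound = error_bound(t)                  # about 0.0118, computed from residual only
```

The bound is computed from the residual alone, yet it provably dominates the true error, which is the practical appeal of such estimates.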
Error Bounds for Deep Learning-based Uncertainty Propagation in SDEs
Kong, Chun-Wei, Laurenti, Luca, McMahon, Jay, Lahijanian, Morteza
Stochastic differential equations are commonly used to describe the evolution of stochastic processes. The uncertainty of such processes is best represented by the probability density function (PDF), whose evolution is governed by the Fokker-Planck partial differential equation (FP-PDE). However, it is generally infeasible to solve the FP-PDE in closed form. In this work, we show that physics-informed neural networks (PINNs) can be trained to approximate the solution PDF using existing methods. The main contribution is the analysis of the approximation error: we develop a theory to construct an arbitrarily tight error bound with PINNs. In addition, we derive a practical error bound that can be efficiently constructed with existing training methods. Finally, we explain that this error-bound theory generalizes to approximate solutions of other linear PDEs. Several numerical experiments are conducted to demonstrate and validate the proposed methods.
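A small sketch of the FP-PDE residual that a PINN-style loss would penalize, evaluated on the Ornstein-Uhlenbeck process, whose stationary PDF is known in closed form (this is an illustrative stationary special case, not the paper's method; parameters theta and sigma are assumptions):

```python
import math

# Ornstein-Uhlenbeck process: dX = -theta * X dt + sigma * dW.
# Its stationary PDF is Gaussian with variance sigma^2 / (2 * theta).
theta, sigma = 1.0, 1.0

def gaussian_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def fp_residual(p, x, h=1e-4):
    """Stationary Fokker-Planck residual: d/dx[theta*x*p] + (sigma^2/2) * p''(x),
    approximated with central finite differences."""
    drift = (theta * (x + h) * p(x + h) - theta * (x - h) * p(x - h)) / (2 * h)
    diff = (p(x + h) - 2 * p(x) + p(x - h)) / (h * h)
    return drift + 0.5 * sigma ** 2 * diff

exact = lambda x: gaussian_pdf(x, sigma ** 2 / (2 * theta))  # true stationary PDF
wrong = lambda x: gaussian_pdf(x, 1.0)                       # wrong variance

# maximum residual magnitude over a grid in [-2, 2]
r_exact = max(abs(fp_residual(exact, x / 10)) for x in range(-20, 21))
r_wrong = max(abs(fp_residual(wrong, x / 10)) for x in range(-20, 21))
```

The residual nearly vanishes for the true PDF and is large for the miscalibrated one, which is exactly the signal a PINN training loss (and the error bounds built on it) exploits.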
Error Estimation for Physics-informed Neural Networks Approximating Semilinear Wave Equations
Lorenz, Beatrice, Bacho, Aras, Kutyniok, Gitta
Solving partial differential equations (PDEs) analytically is often challenging or even impossible, necessitating other methods for obtaining approximate solutions. One way to find approximate solutions to partial differential equations is through classical numerical methods. These methods have been studied for years and already have strong theoretical foundations when it comes to error estimation [1]. However, in recent years, with the rise of machine learning as a whole, there has also been increased interest in applying machine learning methods to the problem of finding approximate solutions to PDEs. As universal function approximators [2], deep neural networks provide a promising avenue for a multitude of approaches to the approximation of solutions to partial differential equations. Among these methods are neural operators, methods based on the Feynman-Kac formula, and methods for parametric PDEs [3, 4, 5]. This paper focuses on physics-informed neural networks (PINNs), which were conceived as feed-forward neural networks that incorporate the dynamics of the PDE into their loss function [6].
Meta-learning to Calibrate Gaussian Processes with Deep Kernels for Regression Uncertainty Estimation
Iwata, Tomoharu, Kumagai, Atsutoshi
Although Gaussian processes (GPs) with deep kernels have been successfully used for meta-learning in regression tasks, their uncertainty estimation performance can be poor. We propose a meta-learning method for calibrating deep kernel GPs that improves regression uncertainty estimation with a limited number of training data. The proposed method meta-learns how to calibrate uncertainty using data from various tasks by minimizing the test expected calibration error, and applies that knowledge to unseen tasks. We design our model such that the adaptation and calibration for each task can be performed without iterative procedures, which enables effective meta-learning. In particular, a task-specific uncalibrated output distribution is modeled by a GP with a task-shared encoder network, and it is transformed into a calibrated one using a cumulative density function of a task-specific Gaussian mixture model (GMM). By integrating the GP and GMM into our neural network-based model, we can meta-learn model parameters in an end-to-end fashion. Our experiments on real-world datasets in few-shot settings demonstrate that the proposed method improves uncertainty estimation performance while maintaining high regression performance compared with existing methods.
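A minimal sketch of the calibration quantity involved: for regression, a predictive distribution is calibrated when the probability integral transform (PIT) values u_i = F_i(y_i) are uniform, and a calibration error can be scored as the average gap between empirical and nominal coverage over quantile levels (an illustrative estimator, not the paper's exact expected-calibration-error objective; the Gaussian predictive distributions are assumptions):

```python
import math, random

def normal_cdf(x, mu, s):
    """CDF of a Gaussian predictive distribution with mean mu and scale s."""
    return 0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2.0))))

def calibration_error(pit, levels=19):
    """Mean |empirical coverage - nominal level| over a grid of quantile levels."""
    n = len(pit)
    err = 0.0
    for k in range(1, levels + 1):
        tau = k / (levels + 1)
        cov = sum(u <= tau for u in pit) / n   # fraction of PIT values below tau
        err += abs(cov - tau)
    return err / levels

random.seed(0)
ys = [random.gauss(0.0, 1.0) for _ in range(5000)]          # true targets
pit_good = [normal_cdf(y, 0.0, 1.0) for y in ys]            # correct predictive scale
pit_over = [normal_cdf(y, 0.0, 2.0) for y in ys]            # overestimated uncertainty
```

The miscalibrated model concentrates PIT values near 0.5 and scores a much larger calibration error, which is the quantity a calibration-aware meta-learner would drive down.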
PINNs error estimates for nonlinear equations in $\mathbb{R}$-smooth Banach spaces
Gao, Jiexing, Zakharian, Yurii
In 2017, M. Raissi et al. introduced the physics-informed neural network (PINN) for approximating solutions to partial differential equations (PDEs) [29, 30]. The method minimizes losses associated with the PDE residual and the boundary/initial conditions. In recent years, the number of papers dedicated to deep learning methods for solving PDEs, including PINNs, has been constantly increasing (see, for instance, [9, 20, 24, 21] for deep learning methods and [14, 16, 17, 18, 31, 32] for PINNs). Consequently, a thorough exploration of the theoretical aspects associated with PINNs is of great significance. For instance, the question arises as to why the PINN training algorithm leads to an accurate approximation. In other words, is it possible to control the total error given sufficiently small residuals/training error? In [25], S. Mishra and R. Molinaro presented an error estimate answering this question and offered an operator-based description of the sufficient conditions for applying such a method. Yet these conditions, while stated quite generally, in practice require obtaining the estimate itself in order to verify them.
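The loss structure described above can be sketched in a few lines. Here a quadratic ansatz stands in for the neural network (an assumption made for brevity), and the loss is the mean squared PDE residual at collocation points plus an initial-condition penalty, for the toy ODE u' = -u, u(0) = 1:

```python
def pinn_loss(params, n_col=50, T=1.0):
    """PINN-style loss: mean squared ODE residual + initial-condition penalty.
    Ansatz u(t) = a + b*t + c*t^2 plays the role of the network."""
    a, b, c = params
    u = lambda t: a + b * t + c * t * t
    du = lambda t: b + 2 * c * t                   # exact derivative of the ansatz
    ts = [T * (i + 0.5) / n_col for i in range(n_col)]   # collocation points
    pde = sum((du(t) + u(t)) ** 2 for t in ts) / n_col   # residual of u' = -u
    ic = (u(0.0) - 1.0) ** 2                             # enforce u(0) = 1
    return pde + ic

loss_taylor = pinn_loss((1.0, -1.0, 0.5))   # Taylor coefficients of exp(-t): small loss
loss_bad = pinn_loss((0.0, 1.0, 1.0))       # arbitrary parameters: large loss
```

Training drives this loss toward zero; the theoretical question raised in the text is precisely whether a small value of this quantity implies a small total error.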
Statistical and Computational Guarantees for Influence Diagnostics
Fisher, Jillian, Liu, Lang, Pillutla, Krishna, Choi, Yejin, Harchaoui, Zaid
Statistical machine learning models have been increasingly used in fully or partially automated data analysis processes and artificial intelligence applications (Rudin, 2019). The automation of decisions that impact society inspires a parallel effort to develop methods for identifying the factors behind specific decisions. The heightened scrutiny of the way statistical models now operate, at large scale and at a fast pace, has led to renewed interest in statistical diagnostics such as the influence function (Cook and Weisberg, 1982; Koh and Liang, 2017; Schioppa et al., 2022; Louvet et al., 2022). The influence function, or influence curve, of a statistical estimator has been proposed to measure the sensitivity of the estimator to individual datapoints. Computing the influence of a particular datapoint boils down to computing an inverse-Hessian-vector product. Owing to a historical focus on least-squares-type estimators with small samples, the computational aspects received relatively little attention until recently (Koh and Liang, 2017; Schioppa et al., 2022), while the statistical aspects have mainly focused on large-sample classical asymptotics (Rousseeuw et al., 2011; Avella-Medina, 2017). The statistical analysis of influence functions for generalized linear models presents several challenges.
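The inverse-Hessian-vector-product computation mentioned above can be made concrete on a 1-D least-squares fit, where the Hessian is a scalar and the first-order influence approximation can be checked against exact leave-one-out refitting (an illustrative sketch under these assumptions, not the paper's estimator):

```python
import random

# Synthetic 1-D regression: y = 2*x + noise, fit y ≈ theta * x.
random.seed(1)
n = 200
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [2.0 * x + random.gauss(0.0, 0.1) for x in xs]

sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
theta = sxy / sxx                          # full-data least-squares estimate

# Hessian of the mean squared loss (a scalar here) and the per-sample gradient
H = 2.0 * sxx / n
i = 0
grad_i = 2.0 * xs[i] * (xs[i] * theta - ys[i])

# first-order influence approximation of the leave-one-out parameter change:
# theta_{-i} ≈ theta + (1/n) * H^{-1} * grad_i
influence = (1.0 / n) * grad_i / H
theta_loo_approx = theta + influence

# exact leave-one-out refit for comparison
theta_loo_exact = (sxy - xs[i] * ys[i]) / (sxx - xs[i] * xs[i])
```

The scalar division by H generalizes to an inverse-Hessian-vector product in higher dimensions, which is where the computational challenges discussed in the text arise.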